Neural Networks

Prepared By Dr. Reza Mousavi, Assistant Professor of Data Science and Business Analytics, UNC- Charlotte
Based on resources mentioned in the References.

1. Introduction

1.1 What is a Neural Network?

An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections between neurons, and the same is true of ANNs. The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer so we can get it to learn things, recognize patterns, and make decisions in a human-like way. The amazing thing about a neural network is that we don’t have to program it to learn explicitly: it learns all by itself, just like the human brain!

1.2 Historical background

Neural network simulations appear to be a recent development. However, the field was established before modern computers existed, and it has survived at least one major setback and passed through several distinct eras.

Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers developed convincing technology that surpassed the limitations identified by Minsky and Papert, who had published a book in 1969 summing up a general feeling of frustration with neural networks among researchers; the book was accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.

The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts. But the technology available at that time did not allow them to do much with it.

1.3 Why use neural networks?

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an “expert” in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer “what if” questions. Other advantages include:

Adaptive Learning: An ability to learn how to do tasks based on the data given for training or initial experience.

Self-Organization: An ANN can create its own organization or representation of the information it receives during the learning time.

Real-Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability. This is one of the reasons why graphical processing units (GPUs) are known to outperform CPUs in ANN tasks.

Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

1.4 Neural networks versus conventional algorithms

Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach; i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don’t exactly know how to do.

Neural networks process information in a way similar to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully; otherwise, useful time is wasted or, even worse, the network might function incorrectly (think about introducing bad role models to kids!). The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.

On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small unambiguous instructions. These instructions are then converted to a high level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware failure.

Neural networks and conventional algorithmic computers are not in competition but complement each other. There are certain tasks that are more suited to an algorithmic approach like arithmetic operations and there are certain tasks that are more suited to neural networks. Even more, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.

1.5 Real and artificial neural networks

Before we go any further, it’s also worth noting some jargon. Strictly speaking, neural networks produced this way are called artificial neural networks (ANNs) to differentiate them from the real neural networks (collections of interconnected brain cells) found inside our brains. You might also see neural networks referred to by names like connectionist machines (the field is also called connectionism), parallel distributed processors (PDP), thinking machines, and so on. Here, however, we’re going to use the term “neural network” throughout and always use it to mean “artificial neural network.”

2. Components of Neural Networks

A typical neural network has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons called units, arranged in a series of layers, each of which connects to the layers on either side. Some of them, known as input units, are designed to receive various forms of information from the outside world that the network will attempt to learn about, recognize, or otherwise process. Other units sit on the opposite side of the network and signal how it responds to the information it’s learned; those are known as output units. Between the input units and output units are one or more layers of hidden units, which together form the majority of the artificial brain. Most neural networks are fully connected, which means each hidden unit and each output unit is connected to every unit in the layers on either side. The connection between one unit and another is represented by a number called a weight, which can be either positive (if one unit excites another) or negative (if one unit suppresses or inhibits another). The higher the weight, the more influence one unit has on another. (This corresponds to the way actual brain cells trigger one another across tiny gaps called synapses.)

As shown in the illustration above, a fully connected neural network is made up of input units (red), hidden units (blue), and output units (yellow), with all the units connected to all the units in the layers on either side. Inputs are fed in from the left, activate the hidden units in the middle, and produce outputs that feed out from the right. The strength (weight) of the connection between any two units is gradually adjusted as the network learns.

3. How Does ANN Work?

To understand how an ANN works, we need to learn about the learning process in the human brain. The human brain consists of billions of neurons. These neurons are connected together via synapses (as shown in the figure below). In the nervous system, a synapse is a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron. These synapses therefore play a major role in transferring information across neurons. Let’s imagine that we are only working with electrical synapses. To transfer information from one neuron to another, the synapse would pass an electric current. By adjusting this current, the synapses can send a variety of information from one neuron to another (for instance, a strong current may mean pain while a weak current may relate to another feeling; this is an oversimplification to help us learn how ANNs work, and the actual mechanism is more complex).

The human brain basically learns by changing the strength of its synaptic connections. In an ANN, we use the same logic; only this time, we use numerical weights instead of electric current. Imagine that we have the following data set and we want to use it to create an ANN.

We want to use the three Xs to predict Y. If we look closely at the data, we notice that the output Y is 1 if at least two of the three inputs are equal to 1. We want to use an ANN to implement this rule. To do so, we need a function called the activation function. We then adjust the weights feeding into that function so that the network produces the desired outcome. An ANN model is an assembly of interconnected nodes and weighted links. The output node sums up each of its input values according to the weights of its links and compares the weighted sum against some threshold t.
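This “at least two of three” rule can be sketched as a single node with hand-picked weights: giving each input a weight of 1 and using a threshold of t = 1.5 makes the weighted sum exceed the threshold exactly when two or more inputs are 1. (The weights and threshold here are illustrative choices of ours, not values learned by a network.)

```r
# A single output node for the rule "Y = 1 if at least two of X1, X2, X3 are 1".
# The weights w and threshold t are chosen by hand for illustration.
activate <- function(x, w = c(1, 1, 1), t = 1.5) {
  as.integer(sum(w * x) > t)  # fire (1) only if the weighted sum exceeds t
}

activate(c(1, 1, 0))  # two inputs on  -> 1
activate(c(1, 0, 0))  # one input on   -> 0
activate(c(1, 1, 1))  # all inputs on  -> 1
```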

It is worth noting that there may be several hidden layers in ANN:

There are several known activation functions:

Information flows through a neural network in two ways: when it is learning (being trained) and when it is operating normally (after being trained). Patterns of information are fed into the network via the input units, which trigger the layers of hidden units, and the results in turn arrive at the output units. This common design is called a feed-forward network. Not all units “fire” all the time. Each unit receives inputs from the units to its left, and the inputs are multiplied by the weights of the connections they travel along. Every unit adds up all the inputs it receives in this way and, in the simplest type of network, if the sum is more than a certain threshold, the unit “fires” and triggers the units it’s connected to (those on its right).

For a neural network to learn, there has to be an element of feedback involved, just as children learn by being told what they’re doing right or wrong. In fact, we all use feedback, all the time. Think back to when you first learned to play a game like ten-pin bowling. As you picked up the heavy ball and rolled it down the alley, your brain watched how quickly the ball moved and the line it followed, and noted how close you came to knocking down the skittles. Next time it was your turn, you remembered what you had done wrong before, modified your movements accordingly, and hopefully threw the ball a bit better. So you used feedback to compare the outcome you wanted with what actually happened, figured out the difference between the two (the error), and used that to change what you did next time (“I need to throw it harder,” “I need to roll slightly more to the left,” “I need to let go later,” and so on). The bigger the difference between the intended and actual outcome (the error), the more radically you would have altered your moves (adjusting the weights).

Neural networks learn things in exactly the same way, typically by a feedback process called backpropagation (sometimes abbreviated as “backprop”). This involves comparing the output a network produces with the output it was meant to produce, and using the difference between them to modify the weights of the connections between the units in the network, working backward from the output units through the hidden units to the input units. Over time, backpropagation causes the network to learn, reducing the difference between the actual and intended output until the two (nearly) coincide, so the network figures things out as it should.
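As a minimal sketch of this feedback loop (a toy example of ours, not the exact procedure used by any particular package), the code below trains a single sigmoid unit on the “at least two of three inputs” rule: on each pass it computes the error between the actual and intended output, propagates that error back through the activation function, and nudges each weight in the direction that reduces the error.

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

# All eight input patterns and the target: Y = 1 if at least two inputs are 1.
X <- as.matrix(expand.grid(0:1, 0:1, 0:1))
y <- as.integer(rowSums(X) >= 2)

set.seed(1)
w  <- runif(3, -0.5, 0.5)  # connection weights, started at small random values
b  <- 0                    # bias term
lr <- 0.5                  # learning rate: how radically we alter the weights

for (epoch in 1:5000) {
  p     <- sigmoid(X %*% w + b)              # forward pass: predicted outputs
  error <- p - y                             # actual minus intended output
  grad  <- error * p * (1 - p)               # error propagated back through sigmoid
  w     <- w - lr * t(X) %*% grad / nrow(X)  # adjust each weight
  b     <- b - lr * mean(grad)               # adjust the bias
}

round(sigmoid(X %*% w + b))  # predicted classes after training; compare with y
```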

4. Different Types of Neural Networks:

There are many different types of neural networks. The two main categories are feedforward and recurrent (or feedback) neural networks. The feedforward neural network was the first and simplest type. In this network, information moves only from the input layer through any hidden layers to the output layer, without cycles/loops. Feedforward neural networks have their own taxonomy (different types of feedforward NNs). On the other hand, recurrent neural networks (RNNs) propagate data forward, but also backwards, from later processing stages to earlier stages. RNNs can be used as general sequence processors. As with feedforward NNs, there are many different types of RNNs.

The main difference between the recurrent neural network and the feedforward neural network, as implied by their names, is that in the feedforward neural network the connections between the units (neurons) do not form a cycle (loop). A feedforward neural network with multiple layers is known as a deep neural network (DNN) or multi-layer perceptron (MLP).

Note: A well-known type of NN called the Convolutional Neural Network (CNN) is used for image processing. For information about the CNN architecture, please refer to: http://cs231n.github.io/convolutional-networks/

Quiz: Do we use backpropagation in feedforward NNs? The answer is yes. Backpropagation is a process by which many algorithms including NNs (both recurrent and feedforward NNs) adjust the model parameters (in NNs these are the weights that need to be adjusted).

Fjodor van Veen from The Asimov Institute has done a great job in creating a visualization that explains the differences among different types of neural networks. Please refer to http://www.asimovinstitute.org/neural-network-zoo/ for more information about different types of neural networks.

5. How Does ANN Work in Practice?

Once the network has been trained with enough learning examples, it reaches a point where we can present it with an entirely new set of inputs that it has never seen before and see how it responds. Presenting an ANN with new inputs to obtain the outputs (predictions) is commonly called scoring the model. For example, suppose we’ve been training an ANN by showing it lots of pictures of chairs and tables, represented in some appropriate way it can understand, and telling it whether each one is a chair or a table. After showing it, let’s say, 25 different chairs and 25 different tables, we feed it a picture of some new design it’s not encountered before—let’s say a chaise longue—and see what happens. Depending on how we’ve trained it, it’ll attempt to categorize the new example as either a chair or a table, generalizing on the basis of its past experience, just like a human. Here, we have taught a computer how to recognize furniture!

That doesn’t mean to say a neural network can just “look” at pieces of furniture and instantly respond to them in meaningful ways; it’s not behaving like a person. Consider the example we’ve just given: the network is not actually looking at pieces of furniture. The inputs to a network are essentially binary numbers: each input unit is either switched on or switched off. So if we had five input units, we could feed in information about five different characteristics of different chairs using binary (yes/no) answers. The questions might be 1) Does it have a back? 2) Does it have a top? 3) Does it have soft upholstery? 4) Can we sit on it comfortably for long periods of time? 5) Can we put lots of things on top of it? A typical chair would then present as Yes, No, Yes, Yes, No or 10110 in binary, while a typical table might be No, Yes, No, No, Yes or 01001. So, during the learning phase, the network is simply looking at lots of numbers like 10110 and 01001 and learning that some mean chair (which might be an output of 1) while others mean table (an output of 0).
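The encoding described above can be written out directly: the five yes/no answers become a row of 0s and 1s that the input units receive. The column names below are shorthand of ours for the five questions in the text.

```r
# Each row is one object described by five binary (yes = 1 / no = 0) answers:
# back, top, soft upholstery, comfortable to sit on, can hold lots of things.
furniture <- data.frame(
  back = c(1, 0), top = c(0, 1), soft = c(1, 0),
  sit  = c(1, 0), hold = c(0, 1),
  row.names = c("chair", "table")
)
furniture
# The chair row reads 1 0 1 1 0 (i.e., 10110); the table row reads 0 1 0 0 1.
```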

More Resources: The content in this document was obtained from:

1- Tan et al. (2006) Introduction to Data Mining.

2- https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html

3- Woodford, Chris. (2011) Neural networks.

4- http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/deep-learning.html

5- http://scialert.net/fulltext/?doi=itj.2007.526.533

6- http://www.asimovinstitute.org/neural-network-zoo/

6. R Code: ANN

6.1. Build the classifier

Step I:

Data Preparation

getwd()
setwd("/Users/Saman/Dropbox (UNC Charlotte)/spring_2017/6211_Reza/week5/")
# Import data:
mydata <- read.table("salary_data.csv", sep=",", header=T, strip.white = T, na.strings = c("NA","NaN","","?"))

# Explore data:
nrow(mydata)
## [1] 32561
summary(mydata)
##       age                   workclass         fnlwgt       
##  Min.   :17.00   Private         :22696   Min.   :  12285  
##  1st Qu.:28.00   Self-emp-not-inc: 2541   1st Qu.: 117827  
##  Median :37.00   Local-gov       : 2093   Median : 178356  
##  Mean   :38.58   State-gov       : 1298   Mean   : 189778  
##  3rd Qu.:48.00   Self-emp-inc    : 1116   3rd Qu.: 237051  
##  Max.   :90.00   (Other)         :  981   Max.   :1484705  
##                  NA's            : 1836                    
##         education     education.num                martital.status 
##  HS-grad     :10501   Min.   : 1.00   Divorced             : 4443  
##  Some-college: 7291   1st Qu.: 9.00   Married-AF-spouse    :   23  
##  Bachelors   : 5355   Median :10.00   Married-civ-spouse   :14976  
##  Masters     : 1723   Mean   :10.08   Married-spouse-absent:  418  
##  Assoc-voc   : 1382   3rd Qu.:12.00   Never-married        :10683  
##  11th        : 1175   Max.   :16.00   Separated            : 1025  
##  (Other)     : 5134                   Widowed              :  993  
##            occupation            relationship                   race      
##  Prof-specialty : 4140   Husband       :13193   Amer-Indian-Eskimo:  311  
##  Craft-repair   : 4099   Not-in-family : 8305   Asian-Pac-Islander: 1039  
##  Exec-managerial: 4066   Other-relative:  981   Black             : 3124  
##  Adm-clerical   : 3770   Own-child     : 5068   Other             :  271  
##  Sales          : 3650   Unmarried     : 3446   White             :27816  
##  (Other)        :10993   Wife          : 1568                             
##  NA's           : 1843                                                    
##      sex         capital.gain    capital.loss    hours.per.week 
##  Female:10771   Min.   :    0   Min.   :   0.0   Min.   : 1.00  
##  Male  :21790   1st Qu.:    0   1st Qu.:   0.0   1st Qu.:40.00  
##                 Median :    0   Median :   0.0   Median :40.00  
##                 Mean   : 1078   Mean   :  87.3   Mean   :40.44  
##                 3rd Qu.:    0   3rd Qu.:   0.0   3rd Qu.:45.00  
##                 Max.   :99999   Max.   :4356.0   Max.   :99.00  
##                                                                 
##        native.country  salary.class 
##  United-States:29170   <=50K:24720  
##  Mexico       :  643   >50K : 7841  
##  Philippines  :  198                
##  Germany      :  137                
##  Canada       :  121                
##  (Other)      : 1709                
##  NA's         :  583
mydata$fnlwgt <- NULL
mydata$education.num <- NULL
mydata$relationship <- NULL
mydata$salary.class <- ifelse(mydata$salary.class == "<=50K", 0, 1)
## Note: Since categorical variables enter into statistical models differently than continuous variables, storing data as factors ensures that the modeling functions will treat such data correctly:
mydata$education <- as.factor(mydata$education)
mydata$martital.status <- as.factor(mydata$martital.status)
mydata$occupation <- as.factor(mydata$occupation)
mydata$race <- as.factor(mydata$race)
mydata$sex <- as.factor(mydata$sex)
mydata$native.country <- as.factor(mydata$native.country)
mydata$salary.class <- as.factor(mydata$salary.class)
summary(mydata)
##       age                   workclass            education    
##  Min.   :17.00   Private         :22696   HS-grad     :10501  
##  1st Qu.:28.00   Self-emp-not-inc: 2541   Some-college: 7291  
##  Median :37.00   Local-gov       : 2093   Bachelors   : 5355  
##  Mean   :38.58   State-gov       : 1298   Masters     : 1723  
##  3rd Qu.:48.00   Self-emp-inc    : 1116   Assoc-voc   : 1382  
##  Max.   :90.00   (Other)         :  981   11th        : 1175  
##                  NA's            : 1836   (Other)     : 5134  
##               martital.status            occupation   
##  Divorced             : 4443   Prof-specialty : 4140  
##  Married-AF-spouse    :   23   Craft-repair   : 4099  
##  Married-civ-spouse   :14976   Exec-managerial: 4066  
##  Married-spouse-absent:  418   Adm-clerical   : 3770  
##  Never-married        :10683   Sales          : 3650  
##  Separated            : 1025   (Other)        :10993  
##  Widowed              :  993   NA's           : 1843  
##                  race           sex         capital.gain  
##  Amer-Indian-Eskimo:  311   Female:10771   Min.   :    0  
##  Asian-Pac-Islander: 1039   Male  :21790   1st Qu.:    0  
##  Black             : 3124                  Median :    0  
##  Other             :  271                  Mean   : 1078  
##  White             :27816                  3rd Qu.:    0  
##                                            Max.   :99999  
##                                                           
##   capital.loss    hours.per.week        native.country  salary.class
##  Min.   :   0.0   Min.   : 1.00   United-States:29170   0:24720     
##  1st Qu.:   0.0   1st Qu.:40.00   Mexico       :  643   1: 7841     
##  Median :   0.0   Median :40.00   Philippines  :  198               
##  Mean   :  87.3   Mean   :40.44   Germany      :  137               
##  3rd Qu.:   0.0   3rd Qu.:45.00   Canada       :  121               
##  Max.   :4356.0   Max.   :99.00   (Other)      : 1709               
##                                   NA's         :  583

There are multiple packages for building neural networks in R. One of the popular packages is “nnet”. This package can only be used for building feedforward neural networks with a single hidden layer. For building recurrent neural networks, we can use the package “rnn”.

# Install the package required for neural networks:
install.packages("nnet")
# Load the package:
library(nnet)

Step II:

Build the classifier:

set.seed(32) 
# Since the data set is large, we keep only the first 5,000 observations:
train_data <- head(mydata, n = 5000)
train_data <- train_data[complete.cases(train_data),] # We only keep the observations with no missing values. 
train_data$salary.class <- as.factor(train_data$salary.class) # Make sure that the target (salary.class) is a factor variable.
summary(train_data)
##       age                   workclass           education   
##  Min.   :17.00   Private         :3369   HS-grad     :1511  
##  1st Qu.:28.00   Self-emp-not-inc: 378   Some-college:1004  
##  Median :37.00   Local-gov       : 324   Bachelors   : 784  
##  Mean   :38.52   State-gov       : 189   Masters     : 237  
##  3rd Qu.:47.00   Self-emp-inc    : 175   Assoc-voc   : 201  
##  Max.   :90.00   Federal-gov     : 144   11th        : 172  
##                  (Other)         :   1   (Other)     : 671  
##               martital.status           occupation  
##  Divorced             : 640   Exec-managerial: 609  
##  Married-AF-spouse    :   4   Prof-specialty : 606  
##  Married-civ-spouse   :2128   Craft-repair   : 603  
##  Married-spouse-absent:  56   Sales          : 578  
##  Never-married        :1477   Adm-clerical   : 567  
##  Separated            : 146   Other-service  : 484  
##  Widowed              : 129   (Other)        :1133  
##                  race          sex        capital.gain    capital.loss    
##  Amer-Indian-Eskimo:  43   Female:1463   Min.   :    0   Min.   :   0.00  
##  Asian-Pac-Islander: 130   Male  :3117   1st Qu.:    0   1st Qu.:   0.00  
##  Black             : 464                 Median :    0   Median :   0.00  
##  Other             :  25                 Mean   : 1058   Mean   :  96.43  
##  White             :3918                 3rd Qu.:    0   3rd Qu.:   0.00  
##                                          Max.   :99999   Max.   :2547.00  
##                                                                           
##  hours.per.week        native.country salary.class
##  Min.   : 1.00   United-States:4163   0:3422      
##  1st Qu.:40.00   Mexico       : 102   1:1158      
##  Median :40.00   Canada       :  24               
##  Mean   :41.15   Germany      :  21               
##  3rd Qu.:45.00   Philippines  :  20               
##  Max.   :99.00   England      :  16               
##                  (Other)      : 234
ann <- nnet(salary.class ~ ., data=train_data, size=10, maxit=1000) # Size is the number of units (nodes) in the hidden layer.
## # weights:  921
## initial  value 2404.612612 
## iter  10 value 2291.060262
## iter  20 value 2238.828241
## iter  30 value 2038.204197
## iter  40 value 1872.566505
## iter  50 value 1670.186235
## iter  60 value 1553.081699
## iter  70 value 1505.397042
## iter  80 value 1476.241148
## iter  90 value 1442.509522
## iter 100 value 1434.669538
## iter 110 value 1429.394797
## iter 120 value 1424.002755
## iter 130 value 1418.945910
## iter 140 value 1412.001357
## iter 150 value 1409.437464
## iter 160 value 1405.525681
## iter 170 value 1388.532075
## iter 180 value 1372.118893
## iter 190 value 1363.898548
## iter 200 value 1355.957712
## iter 210 value 1350.302981
## iter 220 value 1346.498282
## iter 230 value 1341.655118
## iter 240 value 1338.063449
## iter 250 value 1337.231851
## iter 260 value 1335.920466
## iter 270 value 1335.308365
## iter 280 value 1334.372084
## iter 290 value 1332.968937
## iter 300 value 1331.370381
## iter 310 value 1330.043122
## iter 320 value 1329.681769
## iter 330 value 1329.149028
## iter 340 value 1328.609742
## iter 350 value 1327.769116
## iter 360 value 1327.270378
## iter 370 value 1326.871506
## iter 380 value 1326.575476
## iter 390 value 1326.267135
## iter 400 value 1325.935738
## iter 410 value 1325.541518
## iter 420 value 1325.011689
## iter 430 value 1324.311152
## iter 440 value 1322.874486
## iter 450 value 1322.043596
## iter 460 value 1321.642745
## iter 470 value 1321.310639
## iter 480 value 1320.956434
## iter 490 value 1320.639876
## iter 500 value 1320.610156
## iter 510 value 1320.552781
## iter 520 value 1320.489373
## iter 530 value 1320.412676
## iter 540 value 1320.336595
## iter 550 value 1320.287154
## final  value 1320.266490 
## converged
summary(ann) # In the output, b represents the bias associated with a node, h1 represents hidden-layer node 1, i1 represents input node 1 (i.e., input variable 1), and o represents the output node.
## a 90-10-1 network with 921 weights
## options were - entropy fitting 
##   b->h1  i1->h1  i2->h1  i3->h1  i4->h1  i5->h1  i6->h1  i7->h1  i8->h1 
##    0.04    0.72    0.43    0.32   -0.46    0.64    0.36    0.50    0.24 
##  i9->h1 i10->h1 i11->h1 i12->h1 i13->h1 i14->h1 i15->h1 i16->h1 i17->h1 
##   -0.15    0.22   -0.25    0.16    0.38   -0.32    0.35    0.42   -0.11 
## i18->h1 i19->h1 i20->h1 i21->h1 i22->h1 i23->h1 i24->h1 i25->h1 i26->h1 
##    0.17    0.58    0.26    0.55   -0.06   -0.50    0.11    0.67   -0.05 
## i27->h1 i28->h1 i29->h1 i30->h1 i31->h1 i32->h1 i33->h1 i34->h1 i35->h1 
##    0.06    0.06    0.16   -0.15    0.45    0.58    0.46   -0.08    0.38 
## i36->h1 i37->h1 i38->h1 i39->h1 i40->h1 i41->h1 i42->h1 i43->h1 i44->h1 
##    0.04   -0.49    0.42    0.68   -0.15    0.36   -0.51    0.05    0.46 
## i45->h1 i46->h1 i47->h1 i48->h1 i49->h1 i50->h1 i51->h1 i52->h1 i53->h1 
##   -0.39    0.23    0.67    0.38   -0.61    0.66    0.63    0.28   -0.21 
## i54->h1 i55->h1 i56->h1 i57->h1 i58->h1 i59->h1 i60->h1 i61->h1 i62->h1 
##   -0.67   -0.70    0.47    0.52    0.25    0.64    0.41   -0.51    0.31 
## i63->h1 i64->h1 i65->h1 i66->h1 i67->h1 i68->h1 i69->h1 i70->h1 i71->h1 
##   -0.43   -0.41    0.70    0.48   -0.54    0.62   -0.13    0.47   -0.34 
## i72->h1 i73->h1 i74->h1 i75->h1 i76->h1 i77->h1 i78->h1 i79->h1 i80->h1 
##   -0.07   -0.66   -0.54   -0.39    0.34   -0.54   -0.17   -0.17    0.04 
## i81->h1 i82->h1 i83->h1 i84->h1 i85->h1 i86->h1 i87->h1 i88->h1 i89->h1 
##    0.06    0.05   -0.69   -0.45    0.05   -0.07    0.50   -0.40    0.50 
## i90->h1 
##   -0.62 
##   b->h2  i1->h2  i2->h2  i3->h2  i4->h2  i5->h2  i6->h2  i7->h2  i8->h2 
##   -0.25   -0.52   -0.24   -0.32   -0.61   -0.34   -0.14   -0.55   -0.57 
##  i9->h2 i10->h2 i11->h2 i12->h2 i13->h2 i14->h2 i15->h2 i16->h2 i17->h2 
##    0.20    0.50   -0.11   -0.66   -0.25    0.41   -0.04    0.33   -0.61 
## i18->h2 i19->h2 i20->h2 i21->h2 i22->h2 i23->h2 i24->h2 i25->h2 i26->h2 
##    0.32   -0.38   -0.51    0.39   -0.47   -0.10    0.62    0.44   -0.38 
## i27->h2 i28->h2 i29->h2 i30->h2 i31->h2 i32->h2 i33->h2 i34->h2 i35->h2 
##   -0.32   -0.16   -0.48    0.60   -0.67    0.12   -0.16   -0.41    0.12 
## i36->h2 i37->h2 i38->h2 i39->h2 i40->h2 i41->h2 i42->h2 i43->h2 i44->h2 
##   -0.18    0.07   -0.58   -0.45    0.07   -0.40    0.00    0.22    0.14 
## i45->h2 i46->h2 i47->h2 i48->h2 i49->h2 i50->h2 i51->h2 i52->h2 i53->h2 
##    0.21   -0.04   -0.02   -0.54   -0.52   -0.62    0.17    0.63    0.19 
## i54->h2 i55->h2 i56->h2 i57->h2 i58->h2 i59->h2 i60->h2 i61->h2 i62->h2 
##    0.65   -0.28    0.00   -0.43   -0.53   -0.20   -0.51    0.59   -0.37 
## i63->h2 i64->h2 i65->h2 i66->h2 i67->h2 i68->h2 i69->h2 i70->h2 i71->h2 
##   -0.70    0.69    0.18   -0.30   -0.22   -0.66    0.58    0.28   -0.54 
## i72->h2 i73->h2 i74->h2 i75->h2 i76->h2 i77->h2 i78->h2 i79->h2 i80->h2 
##    0.66   -0.54    0.30   -0.66    0.15    0.15    0.07    0.51   -0.50 
## i81->h2 i82->h2 i83->h2 i84->h2 i85->h2 i86->h2 i87->h2 i88->h2 i89->h2 
##    0.46   -0.59    0.09   -0.10    0.41    0.14   -0.39   -0.51   -0.28 
## i90->h2 
##   -0.66 
##   b->h3  i1->h3  i2->h3  i3->h3  i4->h3  i5->h3  i6->h3  i7->h3  i8->h3 
##  158.05  -24.09   79.29   -0.50   30.67 -123.73   30.14   11.10    0.14 
##  i9->h3 i10->h3 i11->h3 i12->h3 i13->h3 i14->h3 i15->h3 i16->h3 i17->h3 
##   37.31    4.57    0.75    9.07   88.10  137.75   -5.35  158.39  -33.39 
## i18->h3 i19->h3 i20->h3 i21->h3 i22->h3 i23->h3 i24->h3 i25->h3 i26->h3 
##  -59.48 -109.60  -87.04   -0.35  -22.05  -44.02    0.92   -4.50   51.65 
## i27->h3 i28->h3 i29->h3 i30->h3 i31->h3 i32->h3 i33->h3 i34->h3 i35->h3 
##   64.54   44.87    0.56    0.50  -28.93 -221.14   25.53  -30.47  114.33 
## i36->h3 i37->h3 i38->h3 i39->h3 i40->h3 i41->h3 i42->h3 i43->h3 i44->h3 
##  -83.52    0.06 -105.62  -89.65 -210.46  -72.23 -119.41   39.53   17.69 
## i45->h3 i46->h3 i47->h3 i48->h3 i49->h3 i50->h3 i51->h3 i52->h3 i53->h3 
##   15.64   75.08   21.09   -0.72   -0.65    8.79  -53.38   29.08    1.89 
## i54->h3 i55->h3 i56->h3 i57->h3 i58->h3 i59->h3 i60->h3 i61->h3 i62->h3 
##   -0.69    7.52   -0.70    1.37   11.41   -1.22   14.25   87.45    1.54 
## i63->h3 i64->h3 i65->h3 i66->h3 i67->h3 i68->h3 i69->h3 i70->h3 i71->h3 
##   -0.35    0.35   -0.72    6.44   -0.31   10.86   -0.83  -96.70  -21.12 
## i72->h3 i73->h3 i74->h3 i75->h3 i76->h3 i77->h3 i78->h3 i79->h3 i80->h3 
##    1.92    1.28    2.07   55.85    0.51    1.46    0.32    2.28    1.29 
## i81->h3 i82->h3 i83->h3 i84->h3 i85->h3 i86->h3 i87->h3 i88->h3 i89->h3 
##    4.49    1.44   -0.31    3.86   -2.39   -3.39    0.69   88.40    0.70 
## i90->h3 
##    0.04 
##   b->h4  i1->h4  i2->h4  i3->h4  i4->h4  i5->h4  i6->h4  i7->h4  i8->h4 
##    0.42   -0.50   -0.35   -0.33    0.35   -0.50    0.31    0.22   -0.13 
##  i9->h4 i10->h4 i11->h4 i12->h4 i13->h4 i14->h4 i15->h4 i16->h4 i17->h4 
##    0.06   -0.52    0.35    0.14   -0.14    0.34   -0.02   -0.09   -0.63 
## i18->h4 i19->h4 i20->h4 i21->h4 i22->h4 i23->h4 i24->h4 i25->h4 i26->h4 
##   -0.08   -0.69    0.18    0.64    0.07   -0.43   -0.33   -0.66    0.29 
## i27->h4 i28->h4 i29->h4 i30->h4 i31->h4 i32->h4 i33->h4 i34->h4 i35->h4 
##   -0.15   -0.34   -0.25    0.35    0.33    0.70   -0.29   -0.65    0.36 
## i36->h4 i37->h4 i38->h4 i39->h4 i40->h4 i41->h4 i42->h4 i43->h4 i44->h4 
##   -0.29    0.55    0.14    0.13   -0.43    0.19    0.64    0.59    0.58 
## i45->h4 i46->h4 i47->h4 i48->h4 i49->h4 i50->h4 i51->h4 i52->h4 i53->h4 
##    0.24    0.16   -0.30    0.53    0.50   -0.72   -0.49    0.21    0.42 
## i54->h4 i55->h4 i56->h4 i57->h4 i58->h4 i59->h4 i60->h4 i61->h4 i62->h4 
##   -0.16    0.64   -0.21   -0.61    0.24    0.09    0.47    0.44    0.38 
## i63->h4 i64->h4 i65->h4 i66->h4 i67->h4 i68->h4 i69->h4 i70->h4 i71->h4 
##   -0.68    0.65   -0.28   -0.25   -0.26   -0.42    0.26   -0.35   -0.18 
## i72->h4 i73->h4 i74->h4 i75->h4 i76->h4 i77->h4 i78->h4 i79->h4 i80->h4 
##    0.39    0.06    0.68    0.35   -0.12    0.17    0.13   -0.06   -0.10 
## i81->h4 i82->h4 i83->h4 i84->h4 i85->h4 i86->h4 i87->h4 i88->h4 i89->h4 
##   -0.33    0.20    0.15   -0.37   -0.58   -0.45   -0.66    0.65    0.22 
## i90->h4 
##    0.25 
##   b->h5  i1->h5  i2->h5  i3->h5  i4->h5  i5->h5  i6->h5  i7->h5  i8->h5 
## -244.54    3.94  120.53   -0.37   98.77  -24.90  194.35  -23.45   -1.46 
##  i9->h5 i10->h5 i11->h5 i12->h5 i13->h5 i14->h5 i15->h5 i16->h5 i17->h5 
##  -46.10 -120.92   -6.72  -33.81  -50.87  -75.54  -33.31   99.18   28.09 
## i18->h5 i19->h5 i20->h5 i21->h5 i22->h5 i23->h5 i24->h5 i25->h5 i26->h5 
##  -22.68  -33.98  -10.36   -2.17    0.76  -83.31   -0.32   99.40 -110.35 
## i27->h5 i28->h5 i29->h5 i30->h5 i31->h5 i32->h5 i33->h5 i34->h5 i35->h5 
##  -21.46   18.00  -35.02   -0.33   18.80  -53.02  -56.98  -94.36 -105.41 
## i36->h5 i37->h5 i38->h5 i39->h5 i40->h5 i41->h5 i42->h5 i43->h5 i44->h5 
## -103.80   -3.96   67.72  154.31  -46.59   -6.02  -99.67  -50.64  -46.74 
## i45->h5 i46->h5 i47->h5 i48->h5 i49->h5 i50->h5 i51->h5 i52->h5 i53->h5 
##  -17.29   73.45   -8.73    0.91   -1.04    2.48   35.64  -21.03    0.68 
## i54->h5 i55->h5 i56->h5 i57->h5 i58->h5 i59->h5 i60->h5 i61->h5 i62->h5 
##   -3.73  -11.19  -10.74   -1.24   -0.05   -0.86 -202.29   -1.40  -17.30 
## i63->h5 i64->h5 i65->h5 i66->h5 i67->h5 i68->h5 i69->h5 i70->h5 i71->h5 
##   -5.76   -0.50    0.53  -45.84   -0.47  113.70   -8.77    2.80   -0.44 
## i72->h5 i73->h5 i74->h5 i75->h5 i76->h5 i77->h5 i78->h5 i79->h5 i80->h5 
##  -26.79   48.43   -0.81  -80.53   -5.11   -0.48   -0.41    8.29    0.57 
## i81->h5 i82->h5 i83->h5 i84->h5 i85->h5 i86->h5 i87->h5 i88->h5 i89->h5 
##   -6.42  -12.27   -0.69  -12.58  -11.60   50.51   -0.83  -39.95   16.53 
## i90->h5 
##   -0.63 
##   b->h6  i1->h6  i2->h6  i3->h6  i4->h6  i5->h6  i6->h6  i7->h6  i8->h6 
##  -24.84    0.04   -4.11   -0.34   -2.78   52.34   -1.58    0.81  -16.55 
##  i9->h6 i10->h6 i11->h6 i12->h6 i13->h6 i14->h6 i15->h6 i16->h6 i17->h6 
##   -1.56    6.08   -9.05   18.84   -4.74    6.71   10.86   13.53    8.53 
## i18->h6 i19->h6 i20->h6 i21->h6 i22->h6 i23->h6 i24->h6 i25->h6 i26->h6 
##   40.53    7.93    8.25   -4.96    8.73    7.22   -4.63    5.63  -39.07 
## i27->h6 i28->h6 i29->h6 i30->h6 i31->h6 i32->h6 i33->h6 i34->h6 i35->h6 
##    6.92    1.61    4.78   -6.95    0.93    4.95    7.38    5.40    0.56 
## i36->h6 i37->h6 i38->h6 i39->h6 i40->h6 i41->h6 i42->h6 i43->h6 i44->h6 
##    1.61  -15.70    2.61    4.63    1.13   26.44   11.47    1.33    0.29 
## i45->h6 i46->h6 i47->h6 i48->h6 i49->h6 i50->h6 i51->h6 i52->h6 i53->h6 
##   -5.43   -0.39   -1.46    0.69   -0.53    0.23   10.95    0.14  -29.49 
## i54->h6 i55->h6 i56->h6 i57->h6 i58->h6 i59->h6 i60->h6 i61->h6 i62->h6 
##    2.51   -0.72   -2.14  -18.56   30.28   -0.25    1.11  -12.89 -102.45 
## i63->h6 i64->h6 i65->h6 i66->h6 i67->h6 i68->h6 i69->h6 i70->h6 i71->h6 
##  -18.97    0.14   -0.88  -11.28    0.11    1.51    8.84    3.99    3.49 
## i72->h6 i73->h6 i74->h6 i75->h6 i76->h6 i77->h6 i78->h6 i79->h6 i80->h6 
##   81.77   59.82  -16.32  -20.80  -10.29  -12.92  -19.87   22.42   -0.01 
## i81->h6 i82->h6 i83->h6 i84->h6 i85->h6 i86->h6 i87->h6 i88->h6 i89->h6 
##    1.14  -30.30   -4.59    1.57    2.08    8.63   -5.99    4.27   -0.57 
## i90->h6 
##  -19.08 
##   b->h7  i1->h7  i2->h7  i3->h7  i4->h7  i5->h7  i6->h7  i7->h7  i8->h7 
##   66.85    0.97  -28.07   -0.55   37.51  -17.42   50.02   17.23    0.54 
##  i9->h7 i10->h7 i11->h7 i12->h7 i13->h7 i14->h7 i15->h7 i16->h7 i17->h7 
##    8.04    0.68    0.62   18.94   -0.53   -6.37    4.83    5.67  -18.26 
## i18->h7 i19->h7 i20->h7 i21->h7 i22->h7 i23->h7 i24->h7 i25->h7 i26->h7 
##   16.69    9.66   38.87    0.10   -6.89  -32.86   -0.17  -32.67   -0.54 
## i27->h7 i28->h7 i29->h7 i30->h7 i31->h7 i32->h7 i33->h7 i34->h7 i35->h7 
##   47.78  -54.76   -0.51    0.20   10.35   -7.57   -2.81    0.36    7.57 
## i36->h7 i37->h7 i38->h7 i39->h7 i40->h7 i41->h7 i42->h7 i43->h7 i44->h7 
##    0.14    0.56  -14.30   40.55   -0.07   12.60   -7.11   13.74    9.27 
## i45->h7 i46->h7 i47->h7 i48->h7 i49->h7 i50->h7 i51->h7 i52->h7 i53->h7 
##    0.62   36.28  -13.09   -0.04    0.11    1.39   10.04    0.53   -0.46 
## i54->h7 i55->h7 i56->h7 i57->h7 i58->h7 i59->h7 i60->h7 i61->h7 i62->h7 
##   -0.52   -0.69   -0.69    0.20   -0.23   -0.23    0.17   31.07   -0.31 
## i63->h7 i64->h7 i65->h7 i66->h7 i67->h7 i68->h7 i69->h7 i70->h7 i71->h7 
##   -0.58    0.65   -0.66    0.32    0.44   10.13    0.58   -0.11    0.25 
## i72->h7 i73->h7 i74->h7 i75->h7 i76->h7 i77->h7 i78->h7 i79->h7 i80->h7 
##   -0.51    0.10   -0.61  -10.15   -0.17    0.65   -0.08    2.82   -0.28 
## i81->h7 i82->h7 i83->h7 i84->h7 i85->h7 i86->h7 i87->h7 i88->h7 i89->h7 
##    0.69    0.46   -0.34   -0.16    0.21    0.30    0.44   22.85   -0.50 
## i90->h7 
##   -0.44 
##   b->h8  i1->h8  i2->h8  i3->h8  i4->h8  i5->h8  i6->h8  i7->h8  i8->h8 
##   -0.68   -0.60   -0.09    0.12    0.10   -0.01    0.49   -0.15   -0.25 
##  i9->h8 i10->h8 i11->h8 i12->h8 i13->h8 i14->h8 i15->h8 i16->h8 i17->h8 
##    0.53   -0.04   -0.01   -0.12   -0.12    0.56   -0.41   -0.62   -0.21 
## i18->h8 i19->h8 i20->h8 i21->h8 i22->h8 i23->h8 i24->h8 i25->h8 i26->h8 
##   -0.60    0.15   -0.13   -0.05   -0.66    0.42   -0.70    0.15    0.23 
## i27->h8 i28->h8 i29->h8 i30->h8 i31->h8 i32->h8 i33->h8 i34->h8 i35->h8 
##    0.27    0.01    0.36   -0.43    0.45   -0.66    0.63   -0.14   -0.39 
## i36->h8 i37->h8 i38->h8 i39->h8 i40->h8 i41->h8 i42->h8 i43->h8 i44->h8 
##   -0.34   -0.14   -0.34   -0.56   -0.29   -0.59   -0.09   -0.32   -0.39 
## i45->h8 i46->h8 i47->h8 i48->h8 i49->h8 i50->h8 i51->h8 i52->h8 i53->h8 
##   -0.60   -0.07   -0.51    0.52   -0.20   -0.14   -0.14   -0.20   -0.23 
## i54->h8 i55->h8 i56->h8 i57->h8 i58->h8 i59->h8 i60->h8 i61->h8 i62->h8 
##    0.64    0.35    0.22    0.39   -0.67    0.38   -0.54    0.57    0.28 
## i63->h8 i64->h8 i65->h8 i66->h8 i67->h8 i68->h8 i69->h8 i70->h8 i71->h8 
##   -0.21   -0.19    0.60   -0.17   -0.45   -0.25    0.54    0.02    0.25 
## i72->h8 i73->h8 i74->h8 i75->h8 i76->h8 i77->h8 i78->h8 i79->h8 i80->h8 
##   -0.68   -0.05   -0.55    0.13   -0.21    0.70    0.33   -0.28   -0.08 
## i81->h8 i82->h8 i83->h8 i84->h8 i85->h8 i86->h8 i87->h8 i88->h8 i89->h8 
##   -0.36   -0.44    0.25   -0.51   -0.50   -0.70    0.27   -0.05    0.24 
## i90->h8 
##   -0.69 
##   b->h9  i1->h9  i2->h9  i3->h9  i4->h9  i5->h9  i6->h9  i7->h9  i8->h9 
##  179.39   -0.11   46.02   -0.55   19.10   11.87   83.94   56.97   29.37 
##  i9->h9 i10->h9 i11->h9 i12->h9 i13->h9 i14->h9 i15->h9 i16->h9 i17->h9 
## -153.11  -63.85   78.62  120.03 -156.29   45.57   -3.54    8.52  -78.89 
## i18->h9 i19->h9 i20->h9 i21->h9 i22->h9 i23->h9 i24->h9 i25->h9 i26->h9 
##  -80.44   -1.58  -90.30   47.19 -117.80  -14.06    3.97  -64.76 -177.52 
## i27->h9 i28->h9 i29->h9 i30->h9 i31->h9 i32->h9 i33->h9 i34->h9 i35->h9 
##   80.11    3.07   71.88    0.86  -27.02  -23.84   50.14   37.99  -27.39 
## i36->h9 i37->h9 i38->h9 i39->h9 i40->h9 i41->h9 i42->h9 i43->h9 i44->h9 
##   13.45   44.94  -23.93    7.34  -14.53  -22.13   37.85 -234.62 -168.02 
## i45->h9 i46->h9 i47->h9 i48->h9 i49->h9 i50->h9 i51->h9 i52->h9 i53->h9 
## -202.01 -160.64   -9.61    0.55   -0.25   -0.08   13.25    8.06   65.56 
## i54->h9 i55->h9 i56->h9 i57->h9 i58->h9 i59->h9 i60->h9 i61->h9 i62->h9 
##  148.90  -14.39    8.79  120.02   72.69   12.20  -84.93   10.93   29.29 
## i63->h9 i64->h9 i65->h9 i66->h9 i67->h9 i68->h9 i69->h9 i70->h9 i71->h9 
##   55.36   -0.41   24.53   13.66   -0.17  -49.19   24.08    5.97 -160.72 
## i72->h9 i73->h9 i74->h9 i75->h9 i76->h9 i77->h9 i78->h9 i79->h9 i80->h9 
##  -17.52  228.42  202.35  -37.29   10.46   17.82   58.31   69.66   15.91 
## i81->h9 i82->h9 i83->h9 i84->h9 i85->h9 i86->h9 i87->h9 i88->h9 i89->h9 
##    9.62  123.95   22.37   -7.67   64.45  124.71   38.03    8.22   -5.89 
## i90->h9 
##   38.22 
##   b->h10  i1->h10  i2->h10  i3->h10  i4->h10  i5->h10  i6->h10  i7->h10 
##    -0.07    -0.81    -0.61     0.65    -0.10    -0.25     0.36     0.64 
##  i8->h10  i9->h10 i10->h10 i11->h10 i12->h10 i13->h10 i14->h10 i15->h10 
##     0.33     0.68    -0.36     0.49    -0.68    -0.22    -0.37     0.68 
## i16->h10 i17->h10 i18->h10 i19->h10 i20->h10 i21->h10 i22->h10 i23->h10 
##     0.34    -0.26    -0.62     0.25    -0.43     0.49     0.61    -0.46 
## i24->h10 i25->h10 i26->h10 i27->h10 i28->h10 i29->h10 i30->h10 i31->h10 
##     0.38    -0.55    -0.03     0.63     0.00     0.55     0.04    -0.42 
## i32->h10 i33->h10 i34->h10 i35->h10 i36->h10 i37->h10 i38->h10 i39->h10 
##    -0.20    -0.01    -0.60     0.56     0.18    -0.38    -0.64    -0.20 
## i40->h10 i41->h10 i42->h10 i43->h10 i44->h10 i45->h10 i46->h10 i47->h10 
##     0.57    -0.42     0.30    -0.16    -0.66    -0.52    -0.16    -0.02 
## i48->h10 i49->h10 i50->h10 i51->h10 i52->h10 i53->h10 i54->h10 i55->h10 
##    -0.56     0.53    -0.16    -0.17     0.07     0.03    -0.61    -0.46 
## i56->h10 i57->h10 i58->h10 i59->h10 i60->h10 i61->h10 i62->h10 i63->h10 
##     0.61     0.03     0.54    -0.51    -0.69     0.52    -0.23    -0.63 
## i64->h10 i65->h10 i66->h10 i67->h10 i68->h10 i69->h10 i70->h10 i71->h10 
##     0.65     0.33    -0.30    -0.13     0.44     0.01     0.03    -0.66 
## i72->h10 i73->h10 i74->h10 i75->h10 i76->h10 i77->h10 i78->h10 i79->h10 
##    -0.28    -0.24     0.38     0.01     0.00    -0.29     0.47     0.13 
## i80->h10 i81->h10 i82->h10 i83->h10 i84->h10 i85->h10 i86->h10 i87->h10 
##     0.17    -0.16    -0.48    -0.27     0.55     0.30     0.39     0.21 
## i88->h10 i89->h10 i90->h10 
##    -0.11    -0.47    -0.69 
##    b->o   h1->o   h2->o   h3->o   h4->o   h5->o   h6->o   h7->o   h8->o 
##  -28.72  -52.32   -0.29   -2.63   10.63   84.67    3.12   -5.96  -10.10 
##   h9->o  h10->o 
##   -3.00   24.13
print(ann)
## a 90-10-1 network with 921 weights
## inputs: age workclassLocal-gov workclassNever-worked workclassPrivate workclassSelf-emp-inc workclassSelf-emp-not-inc workclassState-gov workclassWithout-pay education11th education12th education1st-4th education5th-6th education7th-8th education9th educationAssoc-acdm educationAssoc-voc educationBachelors educationDoctorate educationHS-grad educationMasters educationPreschool educationProf-school educationSome-college martital.statusMarried-AF-spouse martital.statusMarried-civ-spouse martital.statusMarried-spouse-absent martital.statusNever-married martital.statusSeparated martital.statusWidowed occupationArmed-Forces occupationCraft-repair occupationExec-managerial occupationFarming-fishing occupationHandlers-cleaners occupationMachine-op-inspct occupationOther-service occupationPriv-house-serv occupationProf-specialty occupationProtective-serv occupationSales occupationTech-support occupationTransport-moving raceAsian-Pac-Islander raceBlack raceOther raceWhite sexMale capital.gain capital.loss hours.per.week native.countryCanada native.countryChina native.countryColumbia native.countryCuba native.countryDominican-Republic native.countryEcuador native.countryEl-Salvador native.countryEngland native.countryFrance native.countryGermany native.countryGreece native.countryGuatemala native.countryHaiti native.countryHoland-Netherlands native.countryHonduras native.countryHong native.countryHungary native.countryIndia native.countryIran native.countryIreland native.countryItaly native.countryJamaica native.countryJapan native.countryLaos native.countryMexico native.countryNicaragua native.countryOutlying-US(Guam-USVI-etc) native.countryPeru native.countryPhilippines native.countryPoland native.countryPortugal native.countryPuerto-Rico native.countryScotland native.countrySouth native.countryTaiwan native.countryThailand native.countryTrinadad&Tobago native.countryUnited-States native.countryVietnam native.countryYugoslavia 
## output(s): salary.class 
## options were - entropy fitting

5.2. Apply the classifier

Now we can use the classifier to make predictions:

test_data = tail(mydata, n = 1000) # Take the last 1000 observations of mydata and save them as test_data
test_data <- test_data[complete.cases(test_data),] # Remove rows with missing values from test_data
predicted_values <- predict(ann, test_data, type = "raw") # Use the ann classifier to predict class probabilities
final_data <- cbind(test_data, predicted_values) # Add the predictions to test_data
output_colnames <- c(colnames(test_data), "prob.one") # Append a name for the new prediction column
write.table(final_data, file="ann_predictions.csv", sep=",", row.names=F, col.names = output_colnames) # Write the output to a csv file
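Because type = "raw" returns predicted probabilities rather than class labels, a common next step is to threshold the probabilities at 0.5 and cross-tabulate the resulting labels against the actual ones. A minimal, self-contained sketch (the probability and label vectors below are made up for illustration, not taken from our model):

```r
# Hypothetical predicted probabilities and actual class labels
prob   <- c(0.91, 0.20, 0.72, 0.40)
actual <- c(1, 0, 0, 1)

# Threshold at 0.5 to obtain hard class predictions
predicted <- ifelse(prob > 0.5, 1, 0)

# Cross-tabulate actual vs. predicted (a 2x2 confusion matrix)
table(actual, predicted)
```

The diagonal of the resulting table counts correct predictions; the off-diagonal cells count the two kinds of errors (false positives and false negatives).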

Note: It is highly recommended to normalize (scale) the numeric inputs before training and applying an ANN, and to scale the test data with the same parameters (e.g., the minimums and maximums, or the means and standard deviations) computed from the training data.
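As a minimal sketch, min-max scaling can be applied column-wise with a small helper function. The data frame below is made up for illustration; in practice you would scale only the numeric columns of the training and test sets:

```r
# Min-max scaling: maps each column to the [0, 1] range
normalize <- function(x) (x - min(x)) / (max(x) - min(x))

# Toy data frame standing in for the numeric columns of a training set
df <- data.frame(age = c(25, 40, 60), hours.per.week = c(20, 40, 80))
scaled <- as.data.frame(lapply(df, normalize))
scaled  # every value now lies in [0, 1]
```

To scale a test set consistently, compute min(x) and max(x) on the training column and reuse those values, rather than recomputing them on the test data.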

5.3. Building deep learning (and other) models in H2O:

Please open a browser and go to https://www.h2o.ai/download/. Under “H2O”, click on “Latest Stable Release” and then follow the instructions. We can run H2O through its GUI (called H2O Flow) or through R or Python. Please refer to http://docs.h2o.ai/h2o/latest-stable/h2o-docs/index.html for more information.
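Once the h2o package is installed for R, a deep learning model can be trained in a few lines. A rough sketch, assuming a working H2O installation (which requires Java) and using the built-in iris data as a stand-in for our dataset; the layer sizes and epoch count are illustrative only:

```r
library(h2o)

h2o.init(nthreads = -1)   # start a local H2O cluster
hf <- as.h2o(iris)        # copy an R data frame into H2O

# Split into training and test frames (80/20)
splits <- h2o.splitFrame(hf, ratios = 0.8, seed = 1)

# Train a small feed-forward deep learning model
dl <- h2o.deeplearning(x = 1:4, y = "Species",
                       training_frame = splits[[1]],
                       hidden = c(10, 10),  # two hidden layers of 10 units each
                       epochs = 10)

# Predict on the held-out frame and bring the results back into R
preds_df <- as.data.frame(h2o.predict(dl, splits[[2]]))
head(preds_df)
```

The same workflow applies to our adult-income data: upload the data frame with as.h2o(), name the predictor and response columns, and adjust hidden and epochs as needed.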